

Local Maxima in the Likelihood of Gaussian Mixture Models: Structural Results and Algorithmic Consequences

Neural Information Processing Systems

We provide two fundamental results on the population (infinite-sample) likelihood function of Gaussian mixture models with $M \geq 3$ components. Our first main result shows that the population likelihood function has bad local maxima even in the special case of equally-weighted mixtures of well-separated and spherical Gaussians. We prove that the log-likelihood value of these bad local maxima can be arbitrarily worse than that of any global optimum, thereby resolving an open question of Srebro (2007). Our second main result shows that the EM algorithm (or a first-order variant of it) with random initialization will converge to bad critical points with probability at least $1-e^{-\Omega(M)}$. We further establish that a first-order variant of EM will not converge to strict saddle points almost surely, indicating that the poor performance of the first-order method can be attributed to the existence of bad local maxima rather than bad saddle points. Overall, our results highlight the necessity of careful initialization when using the EM algorithm in practice, even when applied in highly favorable settings.
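The failure mode described in the abstract can be illustrated with a minimal EM sketch (a toy demonstration, not the paper's construction): equally-weighted, unit-variance 1-D Gaussians where only the means are updated. A near-truth initialization recovers the centers, while an initialization placing all means near one cluster converges to a poor configuration with strictly lower likelihood.

```python
import numpy as np

def em_spherical_means(X, mu0, sigma=1.0, n_iter=200):
    """EM for an equally-weighted mixture of unit-variance 1-D Gaussians,
    updating only the means (a toy version of the abstract's setting)."""
    mu = mu0.astype(float).copy()
    for _ in range(n_iter):
        # E-step: responsibilities under equal weights and fixed variance.
        logp = -0.5 * ((X[:, None] - mu[None, :]) / sigma) ** 2
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: each mean moves to its responsibility-weighted average.
        mu = (r * X[:, None]).sum(axis=0) / (r.sum(axis=0) + 1e-12)
    return mu

def avg_loglik(X, mu, sigma=1.0):
    """Average log-likelihood under the equally-weighted mixture."""
    logp = (-0.5 * ((X[:, None] - mu[None, :]) / sigma) ** 2
            - np.log(sigma * np.sqrt(2 * np.pi)) - np.log(len(mu)))
    m = logp.max(axis=1)
    return np.mean(m + np.log(np.exp(logp - m[:, None]).sum(axis=1)))

rng = np.random.default_rng(0)
centers = np.array([-10.0, 0.0, 10.0])                    # well separated
X = centers[rng.integers(0, 3, 3000)] + rng.standard_normal(3000)

good = em_spherical_means(X, centers + 0.5)               # near-truth init
bad = em_spherical_means(X, np.array([9.0, 10.0, 11.0]))  # all near one cluster
```

With the bad initialization, one mean drifts to a point between the two left clusters while the remaining means split the right cluster, yielding a markedly lower average log-likelihood than the near-truth fit.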






Bayesian Inference of Temporal Task Specifications from Demonstrations

Ankit Shah, Pritish Kamath, Julie A. Shah, Shen Li

Neural Information Processing Systems

Temporal logics have been used in prior research as a language for expressing desirable system behaviors, and can improve the interpretability of specifications if expressed as compositions of simpler templates (akin to those described by Dwyer et al. [2]).
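Finite-trace checks for a few such templates can be sketched as follows (an illustrative toy with a hypothetical robot trace; the paper's method performs Bayesian inference over richer specifications, which this does not reproduce):

```python
def eventually(trace, prop):
    """Finite-trace 'F prop': prop holds in at least one state."""
    return any(prop(s) for s in trace)

def globally(trace, prop):
    """Finite-trace 'G prop': prop holds in every state."""
    return all(prop(s) for s in trace)

def response(trace, p, q):
    """Dwyer-style response template 'G(p -> F q)': every state
    satisfying p is followed (then or later) by a state satisfying q."""
    return all(any(q(t) for t in trace[i:])
               for i, s in enumerate(trace) if p(s))

# Hypothetical demonstration trace: each state is a set of true propositions.
trace = [{"start"}, {"grasp"}, set(), {"place"}, {"done"}]
ok_response = response(trace, lambda s: "grasp" in s, lambda s: "place" in s)
```

Composing such template checks over demonstrated traces is what makes the inferred specifications readable as a conjunction of familiar patterns.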




Deep Evidential Regression

Neural Information Processing Systems

Deterministic neural networks (NNs) are increasingly being deployed in safety-critical domains, where calibrated, robust, and efficient measures of uncertainty are crucial. In this paper, we propose a novel method for training non-Bayesian NNs to estimate a continuous target as well as its associated evidence in order to learn both aleatoric and epistemic uncertainty.
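A minimal sketch of how predicted evidential parameters map to the two kinds of uncertainty, assuming the Normal-Inverse-Gamma parameterization (gamma, nu, alpha, beta) used in the evidential regression literature; the closed forms below are standard NIG moments, not code from the paper:

```python
def nig_uncertainties(gamma, nu, alpha, beta):
    """Given Normal-Inverse-Gamma parameters predicted by a network
    (assumed nu > 0, alpha > 1), return the point prediction and the
    two standard uncertainty estimates:
      aleatoric: E[sigma^2] = beta / (alpha - 1)
      epistemic: Var[mu]    = beta / (nu * (alpha - 1))
    """
    prediction = gamma
    aleatoric = beta / (alpha - 1.0)
    epistemic = beta / (nu * (alpha - 1.0))
    return prediction, aleatoric, epistemic

pred, alea, epi = nig_uncertainties(0.0, 2.0, 3.0, 4.0)
```

Note how epistemic uncertainty shrinks as nu (the evidence on the mean) grows, while aleatoric uncertainty is governed by beta and alpha alone.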


estimated by the normalized sum $\sum_{i=1}^{n} w_i\, g(X_i) \big/ \sum_{i=1}^{n} w_i$, where $w_i = f(X_i)/q_{i-1}(X_i)$ are

Neural Information Processing Systems

A key object in sequential simulation is the sequence of distributions, called the policy, from which to generate the random variables, called particles, used to approximate the integrals of interest.
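The estimator in the fragment above is self-normalized importance sampling; a minimal single-stage sketch with an illustrative Gaussian target and proposal (all names and densities here are assumptions for the demo, not this paper's policy construction):

```python
import numpy as np

def snis(g, f_pdf, q_pdf, q_sample, n, rng):
    """Self-normalized importance sampling: draw particles X_i ~ q,
    weight them by w_i = f(X_i)/q(X_i), and estimate E_f[g(X)] by
    sum(w_i g(X_i)) / sum(w_i)."""
    X = q_sample(n, rng)
    w = f_pdf(X) / q_pdf(X)
    return np.sum(w * g(X)) / np.sum(w)

# Illustrative setup: target f = N(1, 1), proposal q = N(0, 4).
f_pdf = lambda x: np.exp(-0.5 * (x - 1.0) ** 2) / np.sqrt(2 * np.pi)
q_pdf = lambda x: np.exp(-0.5 * (x / 2.0) ** 2) / (2.0 * np.sqrt(2 * np.pi))
q_sample = lambda n, rng: 2.0 * rng.standard_normal(n)

rng = np.random.default_rng(0)
est = snis(lambda x: x, f_pdf, q_pdf, q_sample, 200_000, rng)  # approx E_f[X] = 1
```

Because the weights are normalized, `f_pdf` only needs to be known up to a constant, which is what makes the scheme usable when the target is unnormalized.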